Round Overview
This Tuesday (at least in my time zone :-) ) offered a pretty tough and exciting match, where correctly solving the Medium problem could secure a very nice position in the final standings in both Divisions.
In Division 2, newcomer _peanut dominated the match, solving all 3 problems with a very fast submission on the 600-point problem. With only 26 correct submissions, that problem turned out to be a bigger challenge than the 950-point problem, which had 45 correct submissions. In a close race, bg95 came in second place and razimantv third.
Watching the battle in Division 1, one could see many different strategies, impressive performances and also surprising failures. Right from the beginning, Petr and lyrically went for the 1000-point problem, which turned out to be very hard and tricky, costing both of them too much of those precious 75 minutes. The rest of the coders, on the other hand, opened the problems in the expected 300-500-1000 order, and tourist’s first submission of the round was also a very expected event :-)
By the end of the coding phase 6 coders had submitted the 1000-point problem, but only one of those submissions turned out to be correct. ainu7, the only person to solve all 3 problems, with very nice times on all of them, later scored 275 more points on challenges and was the absolute winner of the round. More than 650 points behind ainu7 came LayCurse and bmerry in second and third place, respectively.
UnsortedSequence
Problem Details
Used as: Division Two - Level One:
Value | 250 |
Submission Rate | 1067 / 1155 (92.38%) |
Success Rate | 961 / 1067 (90.07%) |
High Score | adams2012 for 249.96 points (0 mins 22 secs) |
Average Score | 199.60 (for 961 correct submissions) |
Let’s try to list all the permutations of the sequence {1,2,3,3} in lexicographical order:
{1,2,3,3}
{1,3,2,3}
{1,3,3,2}
{2,1,3,3}
…
{3,3,2,1}
We can notice two things:
The lexicographically smallest permutation is actually the sequence having its elements sorted in ascending order.
All the other permutations are NOT sorted.
So we can rephrase the problem as finding the second lexicographically smallest permutation, if one exists.
(In the further text, when talking about the smallest permutations, I mean lexicographically smallest.)
When does the second smallest permutation not exist?
Very easy to answer: when all the numbers in the sequence are the same. Then there is only one possible permutation of the sequence.
In such a case we return an empty array.
How to find the smallest permutation?
As mentioned above, just sort the elements of the sequence in ascending order.
How to find the second smallest permutation?
Coders using C++ now have a very simple task when searching for the second smallest permutation: just calling the STL function next_permutation() on the sorted sequence gives the result.
Coders in other languages, and those C++ coders who didn’t know the trick, need to write a couple more lines.
But let me first explain how to solve one very easy test case. Having a sequence consisting of one 1 and one or more 2’s, i.e. {1,2,2,2,…,2}, what is the second smallest permutation of such a sequence? It shouldn’t be hard to see that it is {2,1,2,2,…,2}, i.e. the same sequence with the first two elements swapped.
Now back to the general case. Starting from the sorted sequence, to find the second smallest permutation, we obviously want to leave as many elements as possible at the beginning of the sequence untouched. Let’s state it this way: for each index i, we don’t want to move the element at index i unless there is no way to unsort the elements at indices (i+1)…(N-1).
Selecting the biggest i such that s[i] < s[i+1] ensures that the elements at indices (i+1),(i+2),…,(N-1) are all the same (otherwise we could pick a bigger i). We already know that it is not possible to unsort a sequence consisting only of equal numbers, so we also have to move the element at index i. Focusing only on indices i…(N-1), we have the sequence {s[i],s[i+1],s[i+1],…,s[i+1]} (remember that the elements at (i+1)…(N-1) are all the same). And this sequence is, omitting the exact values, the same as the one we’ve seen before, i.e. {1,2,2,2,…,2}. And we already know how to optimally unsort that one: by swapping the first two elements.
Code in Java:
import java.util.Arrays;

public class UnsortedSequence {
    public int[] getUnsorted(int[] s) {
        Arrays.sort(s);
        // a sequence consisting of equal elements has only one permutation
        if (s[0] == s[s.length - 1])
            return new int[0];
        // find the biggest i such that s[i] < s[i+1]
        int i = s.length - 2;
        while (s[i] == s[i + 1])
            --i;
        // swap the first two elements of the subsequence i,(i+1),...,(N-1)
        int temp = s[i];
        s[i] = s[i + 1];
        s[i + 1] = temp;
        return s;
    }
}
Code in C++:
#include <algorithm>
#include <vector>
using namespace std;

class UnsortedSequence {
public:
    vector<int> getUnsorted(vector<int> s) {
        sort(s.begin(), s.end());
        // a sequence consisting of equal elements has only one permutation
        if (s[0] == s[s.size() - 1])
            return vector<int>();
        // advance the sorted sequence to its lexicographic successor
        next_permutation(s.begin(), s.end());
        return s;
    }
};
NoRepeatPlaylist
Problem Details
Used as: Division Two - Level Two:
Value | 600 |
Submission Rate | 63 / 1155 (5.45%) |
Success Rate | 26 / 63 (41.27%) |
High Score | _peanut for 530.75 points (10 mins 32 secs) |
Average Score | 315.55 (for 26 correct submissions) |
Used as: Division One - Level One:
Value | 300 |
Submission Rate | 442 / 791 (55.88%) |
Success Rate | 353 / 442 (79.86%) |
High Score | Burunduk1 for 292.61 points (4 mins 32 secs) |
Average Score | 190.95 (for 353 correct submissions) |
This problem turned out to be pretty tough, especially for Div2 coders. But once people got the correct idea, the implementation was pretty straightforward, with few places to make an error.
We will look at two different approaches to solving this problem. The first approach uses dynamic programming. The second approach uses the inclusion-exclusion principle, so a little mathematical background is needed for that one.
But first, let’s define some terms for better clarity:
Completeness rule - Each song has to be played at least once.
Isolation rule - At least M songs have to be played between any two occurrences of the same song.
Distinctness Observation
Any consecutive section of S songs (S <= (M+1)) of a valid playlist contains only pairwise distinct songs, i.e. no two songs in the section are the same.
Proof
Isolation rule says: “At least M songs have to be played between any two occurrences of the same song.”
Thus the shortest valid section containing two occurrences of the same song has at least (M+2) songs, so any section of at most (M+1) songs contains pairwise distinct songs.
Dynamic programming approach
In this section I will explain the approach using dynamic programming. I will explain it through the recursion-with-memoization technique, which is essentially the same thing.
Let’s imagine for a moment that you are Michael’s cell phone and you are creating a playlist of length P using N available songs. At each moment, you have to decide which song to play next. We can divide the songs into two groups:
group X contains the songs you’ve already played (let XS be its size).
group Y contains the songs you haven’t played yet (let YS be its size).
When you start creating the playlist, group X is empty and group Y contains all the songs. When you finish the playlist, group X should contain all the songs and group Y should be empty, i.e. you’ve used all the songs.
At each moment of creating the playlist you can do one of two things:
if YS > 0, choose any song from group Y and play it. After this, move the song from Y to X.
Note that you can choose any song from Y without violating the Isolation rule, because the song is played for the first time and does not collide with any previously played song.
if XS > M, choose any of the (XS-M) songs from group X that are not violating the Isolation rule and play it.
Now what the heck just happened?
When choosing a song from group X, we have to be careful not to play a song that violates the Isolation rule. In particular, we have to pay attention to the last M songs we’ve just played, because colliding with any of those songs violates the rule. From the Distinctness Observation we know that the last M songs are pairwise different (a section of length M certainly satisfies S <= (M+1)), and thus there are exactly M different songs in group X that we can’t play. More optimistically, there are (XS-M) songs that we can play :-)
It also means that XS has to be greater than M, because otherwise all the songs in group X are forbidden (this situation occurs at the beginning of the playlist, when we’ve played at most M songs).
So now we have an idea of how to generate a valid playlist. It is worth noticing that it is sufficient to work only with the sizes of X and Y; we don’t need the actual groups. Here is a simple function that recursively counts the number of valid playlists we can create (P, M and MOD = 1,000,000,007 are assumed to be accessible constants):
long countPlaylists(int position, int XS, int YS) {
    // the playlist is full
    if (position == P)
        return YS == 0 ? 1 : 0; // check the Completeness rule
    long result = 0;
    // use the song from Y - there are YS possibilities on which song to play
    if (YS > 0)
        result += countPlaylists(position + 1, XS + 1, YS - 1) * YS;
    // use the song from X - there are (XS-M) possibilities on which song to play
    if (XS > M)
        result += countPlaylists(position + 1, XS, YS) * (XS - M);
    return result % MOD;
}
Although calling countPlaylists(0, 0, N) returns the correct result, it will be painfully slow. But we can notice that the arguments of the function take rather small values (<= 100). So we turn a little memoization trick to our advantage and voila:
long[][][] dp = new long[101][101][101]; // filled with -1 before the first call
long countPlaylists(int position, int XS, int YS) {
    // the playlist is full
    if (position == P)
        return YS == 0 ? 1 : 0; // check the Completeness rule
    // check the memoization
    if (dp[position][XS][YS] != -1)
        return dp[position][XS][YS];
    long result = 0;
    // use the song from Y - there are YS possibilities on which song to play
    if (YS > 0)
        result += countPlaylists(position + 1, XS + 1, YS - 1) * YS;
    // use the song from X - there are (XS-M) possibilities on which song to play
    if (XS > M)
        result += countPlaylists(position + 1, XS, YS) * (XS - M);
    return dp[position][XS][YS] = result % MOD;
}
Now for each possible triplet (position, XS, YS) the function calculates the answer at most once. We can further reduce the number of states, 100^3 = 1,000,000, by noticing that the arguments (position, XS) alone are sufficient: at every moment we can compute YS as (N-XS). This gives us an overall time and space complexity of 100^2 = 10,000 states.
For a clean implementation of the pure dynamic programming approach see Gluk’s code.
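If you prefer the iterative view, here is a minimal sketch of this reduced DP (my own illustration, not Gluk’s code), assuming N, M, P and MOD = 1,000,000,007 are defined as in the problem:

long[][] ways = new long[P + 1][N + 1]; // ways[p][x] = playlists of length p using exactly x distinct songs
ways[0][0] = 1; // the empty playlist
for (int p = 0; p < P; ++p)
    for (int x = 0; x <= N; ++x) {
        if (ways[p][x] == 0)
            continue;
        // play a song never played before: (N - x) choices
        if (x < N)
            ways[p + 1][x + 1] = (ways[p + 1][x + 1] + ways[p][x] * (N - x)) % MOD;
        // replay an already used song: (x - M) choices, possible only when x > M
        if (x > M)
            ways[p + 1][x] = (ways[p + 1][x] + ways[p][x] * (x - M)) % MOD;
    }
// the answer is ways[P][N]: all P positions filled, all N songs used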
Inclusion-exclusion principle approach
If you’ve never heard of the inclusion-exclusion principle (IEP) before, please read the Wikipedia article about it before you start reading further. Otherwise the further discussion will not make much sense.
Imagine that we relax the problem a little: we will not insist on playing each of the N available songs at least once, i.e. we relax the Completeness rule. How would we solve such a problem?
Again, imagine that you are Michael’s cell phone and you are creating a playlist for him. For choosing the first M songs to play, you have:
N!/(N-M)! = N*(N-1)*…*(N-M+1) possibilities.
It is because you have N possibilities on what to play as the first song.
Then you have (N-1) possibilities on what to play as the second song (you can’t play what you’ve played last time).
Then you have (N-2) possibilities on what to play as the third song (you can’t play what you’ve played the last two times).
…
After you’ve chosen the first M songs, you will have exactly (N-M) possibilities for every following song (because at each moment the last M songs are forbidden). So the total number of playlists with the relaxed Completeness rule is:
N!/(N-M)! * (N-M)^(P-M)
Pretty simple to code, huh? No dynamic programming involved ;-)
So how do we use this solution to solve our original problem? Well, first we calculate the number of relaxed playlists using N songs. But because of the relaxed Completeness rule, we have also counted some playlists using fewer than N songs. So for each possible subset of (N-1) songs we subtract the number of relaxed playlists using those (N-1) songs. But then each relaxed playlist using (N-2) songs has been subtracted too many times, so we need to add those back. Etc., etc. If you’ve read the article on Wikipedia, you should already see that this is exactly where IEP comes into play.
The final number of valid playlists can thus be computed as:
sum over K = 0, 1, …, N of (-1)^(N-K) * C(N,K) * relaxed(K)
where C(N,K) is the binomial coefficient and relaxed(K) = K!/(K-M)! * (K-M)^(P-M) is the number of relaxed playlists drawn from a fixed set of K songs (terms with K <= M are zero, since with at most M songs no playlist of length P > M can obey the Isolation rule).
For a clean implementation of this approach see wata’s code.
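To make the formula concrete, here is a hedged, self-contained sketch of the whole IEP computation (my own code, not wata’s; the class and method names are only illustrative):

public class NoRepeatPlaylist {
    static final long MOD = 1000000007;

    // playlists of length P obeying only the Isolation rule, drawn from a fixed set of K songs:
    // at position i the last min(i, M) songs are pairwise distinct and forbidden
    static long relaxed(int K, int M, int P) {
        long result = 1;
        for (int i = 0; i < P; ++i) {
            long choices = K - Math.min(i, M);
            if (choices <= 0)
                return 0;
            result = result * choices % MOD;
        }
        return result;
    }

    public int numPlaylists(int N, int M, int P) {
        // binomial coefficients via Pascal's triangle
        long[][] C = new long[N + 1][N + 1];
        for (int i = 0; i <= N; ++i) {
            C[i][0] = 1;
            for (int j = 1; j <= i; ++j)
                C[i][j] = (C[i - 1][j - 1] + C[i - 1][j]) % MOD;
        }
        // inclusion-exclusion over the number K of songs actually allowed
        long answer = 0;
        for (int K = 0; K <= N; ++K) {
            long term = C[N][K] * relaxed(K, M, P) % MOD;
            answer = (N - K) % 2 == 0 ? (answer + term) % MOD : (answer - term + MOD) % MOD;
        }
        return (int) answer;
    }
}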
KingdomReorganization
Problem Details
Used as: Division Two - Level Three:
Value | 950 |
Submission Rate | 56 / 1155 (4.85%) |
Success Rate | 45 / 56 (80.36%) |
High Score | NovaDev for 687.92 points (19 mins 8 secs) |
Average Score | 492.01 (for 45 correct submissions) |
We will represent our kingdom as an undirected graph with N vertices representing the N cities of the kingdom. There is an edge between vertices i and j if and only if a road currently leads between cities i and j. Additionally, for each edge e we know the cost of adding it to (build[e]) and removing it from (destroy[e]) the graph.
What the problem statement asks us to do is to find a spanning tree of the graph for the minimal possible cost. First, let’s have a look at how to solve a special instance of this problem: the kingdom initially contains no roads at all.
In this case, we can ignore the array destroy, because all the costs are non-negative and it would never help to build and then destroy the same road.
The problem where you are given an empty graph on N vertices, you know for each potential edge the cost of adding it to the graph, and you have to find a spanning tree of minimal cost, is the well-known minimum spanning tree (MST) problem. There are many algorithms that solve this problem: Kruskal’s algorithm, Prim’s algorithm and Boruvka’s algorithm.
From now on I will suppose you know at least one of those algorithms.
A little advice: from my personal experience, Kruskal’s algorithm is the easiest one to implement during a competition (even when the disjoint-set data structure is used).
Now we are going to solve the original problem, where the input graph can have any possible structure. So hold your breath and trust me on this one, because we are going to use a little trick here :-)
First, we destroy all the existing edges in the graph and store the cost of this “massive destruction” in the variable massiveCost. Destroying each edge e costs us destroy[e], of course. Now we want to find the MST on this empty graph, using one of the algorithms mentioned in the last section. But… (ALERT! Trick on the radar!)
When we want to add an edge to the spanning tree that initially wasn’t in the graph, we set its cost to build[e], as one would expect. But for an edge that was initially in the graph and that we destroyed, it makes much better sense not to have destroyed it in the first place. We can emulate this behavior by setting the edge’s cost to -destroy[e] (note that -destroy[e] <= build[e], so we never get a worse result than with build[e] as its cost). So when we add the cost of destroying the edge and then putting it back, we get 0. You see? It looks like we never even destroyed it.
Setting the costs of the edges as described above, we find the MST on the graph and store its cost in the variable mstCost. This MST is the best combination of the edges we’d better not destroy and the totally new edges we need to add. The final answer to our problem is then massiveCost+mstCost.
The time complexity of this solution depends on the algorithm you choose to find the MST. The sample solution (see below) uses Kruskal’s algorithm without the disjoint-set data structure, so sorting the O(N^2) edges dominates with O(N^2 log N). It can seem that the last for-cycle produces O(N^3) complexity, iterating over N values for each edge, but the recoloring only happens for the edges added to the spanning tree, and there are only (N-1) of them, so that part takes O(N^2).
Code in Java:
import java.util.*;

public class KingdomReorganization {
    class Edge implements Comparable<Edge> {
        public int a, b, cost;
        public Edge(int a, int b, int cost) {
            this.a = a;
            this.b = b;
            this.cost = cost;
        }
        public int compareTo(Edge e) {
            return cost - e.cost;
        }
    }

    int getValue(char c) {
        if (c >= 'A' && c <= 'Z')
            return c - 'A';
        return c - 'a' + 26;
    }

    public int getCost(String[] kingdom, String[] build, String[] destroy) {
        int N = kingdom.length, massiveCost = 0, mstCost = 0;
        // get the cost of each edge + destroy all the existing edges
        ArrayList<Edge> edges = new ArrayList<Edge>();
        for (int i = 0; i < N; ++i)
            for (int j = i + 1; j < N; ++j) {
                if (kingdom[i].charAt(j) == '0')
                    edges.add(new Edge(i, j, getValue(build[i].charAt(j))));
                else {
                    int val = getValue(destroy[i].charAt(j));
                    edges.add(new Edge(i, j, -val));
                    massiveCost += val;
                }
            }
        // solve the MST on the graph, using Kruskal's algorithm
        Collections.sort(edges);
        int[] color = new int[N];
        for (int i = 0; i < N; ++i)
            color[i] = i;
        for (int i = 0; i < edges.size(); ++i) {
            Edge e = edges.get(i);
            // vertices of the edge are not in the same component
            if (color[e.a] != color[e.b]) {
                mstCost += e.cost;
                // recolor the component
                int oldColor = color[e.b];
                for (int j = 0; j < N; ++j)
                    if (color[j] == oldColor)
                        color[j] = color[e.a];
            }
        }
        return massiveCost + mstCost;
    }
}
MonsterFarm
Problem Details
Used as: Division One - Level Two:
Value | 500 |
Submission Rate | 364 / 791 (46.02%) |
Success Rate | 104 / 364 (28.57%) |
High Score | liympanda for 444.52 points (10 mins 18 secs) |
Average Score | 294.90 (for 104 correct submissions) |
This was my personal favourite problem of the round. With several different approaches and many more places to make a mistake, it had big potential for the challenge phase. Exactly 100 solutions were successfully challenged, and 160 more were left to fail the final system testing.
Let me rephrase the problem statement in terms of graph theory, with Pinguos as vertices of our graph and oriented edges representing the transformations of Pinguos. For each transformation of Pinguo i into Pinguo j we will have an oriented edge going from vertex i to vertex j. (Note that in our graph multiple edges can lead between a pair of vertices, in one or both directions. There can also be multiple loops from a vertex to itself.)
Also for the sake of programming clarity, let’s index Pinguos and vertices from 0. So initially you have a single Pinguo of type 0.
Using the Floyd-Warshall algorithm or any other method, we compute an array of boolean values path[][]. path[i][j] will be true if and only if there is a path from vertex i to vertex j using at least one edge. Note that path[i][i] is true only when vertex i lies on a cycle (we consider simple loops to be cycles as well).
Let me also define an array of integers deg[], where deg[i] is the outdegree of vertex i, i.e. the number of edges leaving vertex i.
The number of Pinguos grows beyond all bounds if and only if there exists a vertex i such that:
path[0][i] == true (Starting with Pinguo of type 0, at some point we will get Pinguo of type i.)
deg[i] >= 2 (Pinguo of type i makes the number of Pinguos grow.)
path[i][i] == true (Pinguo of type i, can either directly or indirectly generate itself.)
Because we are proving an equivalence, we are going to prove each implication separately.
The first implication is very easy to see. When there exists a Pinguo of type i that generates more than one new Pinguo (deg[i] >= 2), we get this Pinguo at some point (path[0][i] == true), and once we have it, it will either directly or indirectly generate its own copies (path[i][i] == true), so the total number of Pinguos will grow forever.
For the other implication, non-existence of a vertex satisfying the above three properties means that each vertex of the graph fails at least one property. We will pay attention only to vertex 0 and the vertices satisfying path[0][i] == true (let me remind you that path[i][j] is true if and only if there is a path from vertex i to vertex j using at least one edge). The types of Pinguos we never obtain, starting with the Pinguo of type 0, are of no interest to us. Let’s now analyze what the graphs with each vertex failing property 2. or 3. look like.
If we look at the vertices lying on cycles (satisfying 3.), each of them has to have deg[i] == 1 (failing 2.), since deg[i] == 0 is not allowed by the problem statement. These cycles are disjoint, and once you get into such a cycle you can never leave it, because leaving would require one of its vertices to have deg[i] > 1. So every Pinguo lying on such a cycle does not make the total number of Pinguos grow.
Taking all the cycles out of our graph, what do we get? The million dollar answer is: a directed acyclic graph. And when the graph is acyclic, it should be pretty clear that the number of Pinguos cannot grow toward infinity (as an exercise, try to prove that after at most N steps the number stops growing). Putting the cycles back into the graph, the whole structure looks like a directed acyclic graph with edges leading from some of its vertices into simple cycles. The only place where the number of Pinguos can actually grow is the acyclic part of the graph; once we get into a cycle, the number stays the same.
So when we suppose the non-existence of a vertex satisfying all three above-mentioned conditions and discard all the vertices unreachable from vertex 0, we get a graph structure that can generate only a finite number of Pinguos. Q.E.D.
In the first step we check for the existence of a vertex i satisfying the above-mentioned three conditions. If such a vertex exists, we return -1. Otherwise we calculate the final, finite number of Pinguos by simply simulating the growth process for long enough. And how long is long enough? As I mentioned above, the only place where the number of Pinguos can grow is the acyclic part of the graph, and if you did the exercise, you know that after at most N steps the number stops growing.
The time complexity of this solution is O(N^3).
Code in Java:
package monsterfarm;
import java.util.*;

public class MonsterFarm {
    final long MOD = 1000000007;
    public int numMonsters(String[] transforms) {
        int N = transforms.length;
        int[][] graph = new int[N][N];
        int[] deg = new int[N];
        boolean[][] path = new boolean[N][N];
        for (int i = 0; i < N; ++i) {
            Scanner s = new Scanner(transforms[i]);
            while (s.hasNext()) {
                ++deg[i];
                int next = s.nextInt() - 1;
                ++graph[i][next];
                path[i][next] = true;
            }
        }
        // compute the path array using Floyd-Warshall
        for (int k = 0; k < N; ++k)
            for (int i = 0; i < N; ++i)
                for (int j = 0; j < N; ++j)
                    path[i][j] = path[i][j] || (path[i][k] && path[k][j]);
        // check the condition for infinity
        for (int i = 0; i < N; ++i)
            if (path[0][i] && deg[i] >= 2 && path[i][i])
                return -1;
        // simulate the process for N steps
        long[][] monsters = new long[2][N];
        monsters[0][0] = 1;
        for (int step = 0; step < N; ++step) {
            Arrays.fill(monsters[1], 0);
            for (int i = 0; i < N; ++i)
                for (int j = 0; j < N; ++j)
                    monsters[1][j] = (monsters[1][j] + monsters[0][i] * graph[i][j]) % MOD;
            for (int i = 0; i < N; ++i)
                monsters[0][i] = monsters[1][i];
        }
        // sum up the living monsters and return the answer
        long result = 0;
        for (int i = 0; i < N; ++i)
            result += monsters[0][i];
        return (int)(result % MOD);
    }
}
Based on the above observation, “when the number of Pinguos is finite, it stops changing after at most N steps”, we can alternatively detect whether the number of Pinguos grows toward infinity: simply simulate the growth process for N and for 2*N steps and check whether the total number of Pinguos is the same (as an exercise, try to create a test case where the answer is infinity, but the number of Pinguos changes only once per N steps).
Probably many coders came up with a similar idea, but not so many of them implemented it correctly. The catch is that the actual number of Pinguos can grow as large as 10^70 (maybe a little less, but you get the point: it overflows 64-bit integers). That’s why many coders calculated the number of Pinguos in modular arithmetic, which can sometimes wrongly identify an infinite number of Pinguos as finite (see the section below for more details).
So the correct way to implement this solution is to either use a BigInteger library (very well suited for Java coders) or, via the Chinese Remainder Theorem, use several large prime moduli (8 primes of value around 10^9 should be enough).
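As an illustration of the BigInteger route, here is a minimal sketch (my own, not taken from any competitor’s solution); graph[][] is the multiplicity matrix built exactly as in the code above:

import java.math.BigInteger;
import java.util.Arrays;

// the exact total number of Pinguos after the given number of steps;
// graph[i][j] = how many Pinguos of type j one Pinguo of type i produces
static BigInteger totalAfter(int steps, int[][] graph) {
    int N = graph.length;
    BigInteger[] cur = new BigInteger[N];
    Arrays.fill(cur, BigInteger.ZERO);
    cur[0] = BigInteger.ONE; // we start with a single Pinguo of type 0
    for (int s = 0; s < steps; ++s) {
        BigInteger[] next = new BigInteger[N];
        Arrays.fill(next, BigInteger.ZERO);
        for (int i = 0; i < N; ++i)
            if (cur[i].signum() > 0)
                for (int j = 0; j < N; ++j)
                    if (graph[i][j] > 0)
                        next[j] = next[j].add(cur[i].multiply(BigInteger.valueOf(graph[i][j])));
        cur = next;
    }
    BigInteger total = BigInteger.ZERO;
    for (BigInteger x : cur)
        total = total.add(x);
    return total;
}

// the answer is infinite exactly when the two exact totals differ:
// boolean infinite = !totalAfter(N, graph).equals(totalAfter(2 * N, graph));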
Now I will explain how to create a challenge case against the solutions that count the number of monsters using only a single modulus. The most commonly used modulus was probably the one stated in the problem statement, i.e. 1,000,000,007, so I will create a challenge case for that one.
The key is to create a challenge case where the number of Pinguos grows toward infinity, but a solution counting the monsters modulo 1,000,000,007 does not notice it. That happens when the exact number of Pinguos grows in multiples of the modulus. So we first generate 10^9+7 Pinguos and then, every couple of steps, multiply this number by 2.
BuildingReorganization
Problem Details
Used as: Division One - Level Three:
Value | 1000 |
Submission Rate | 6 / 791 (0.76%) |
Success Rate | 1 / 6 (16.67%) |
High Score | ainu7 for 530.12 points (33 mins 53 secs) |
Average Score | 530.12 (for 1 correct submission) |
This problem turned out to be too tricky even for the skilled coders. During the competition, only ainu7 managed to solve it correctly, using a slightly different solution than the one presented in this section.
Let’s solve an easier problem first. Let’s assume that there is only one building to be deconstructed (as opposed to two buildings from the original problem) and its index is A.
To see the solution, let’s take an example: there are three buildings, heights of which are {5, 4, 1}, cost = 2 and A = 0 (we want to deconstruct the leftmost building).
First, it costs us 0 + 1 + 2 + 3 + 4 = 10 coins to lower all the floors of the leftmost building. Now we want to move them on the top of the other two buildings. If we move the floors to the 5th, 6th, 7th, 8th, … floors of the middle building, it would cost us 6, 7, 8, 9, … coins, respectively. (For example, moving to the 5th floor would cost us 2 coins for the ground move and 4 coins for the lift move.) If we move the floors to the 2nd, 3rd, 4th, 5th, … floors of the rightmost building, it would cost us 5, 6, 7, 8, … coins, respectively. Obviously we want to choose the cheapest five moves, i.e. the 5th and 6th floors of the middle building and the 2nd, 3rd and 4th floors of the rightmost building. It costs us 6 + 7 + 5 + 6 + 7 = 31 coins. So altogether we have to pay 10 + 31 = 41 coins.
The core observation from the above example is that greedily making for each floor the cheapest possible move will get us the correct result. Of course the straightforward implementation would not run in time for the given constraints, so we need to do something better.
We can binary search on the cost of the most expensive move we are willing to make. For each such cost, let’s call it expensive, we calculate the number of floors we are able to move, with each move costing us (not including the cost of lowering the floors of the building) at most expensive coins. We can calculate this number in time linear in the number of buildings, i.e. independent of the number of floors.
We want to find the minimal x such that we are able to move at least heights[A] floors, with each move costing us no more than x coins. In the above example x = 7 (because there are five floors with cost <= 7 and only three floors with cost <= 6). Then we move as many floors as we can for cost <= x-1 and the appropriate number of remaining floors for cost exactly x.
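Here is a minimal sketch of that linear counting step (the helper name is mine; the cost model, cost times the ground distance plus the current height of the target building for the lift, is taken from the worked example above):

// how many floors can be moved away from building A if no single move
// may cost more than 'expensive' coins
static long movableFloors(long expensive, long[] heights, int A, long cost) {
    long total = 0;
    for (int i = 0; i < heights.length; ++i) {
        if (i == A)
            continue;
        // the cheapest slot on building i is right on top of its current floors
        long cheapest = cost * Math.abs(i - A) + heights[i];
        if (expensive >= cheapest)
            total += expensive - cheapest + 1; // slots costing cheapest, cheapest+1, ..., expensive
    }
    return total;
}

Binary searching for the minimal x with movableFloors(x, heights, A, cost) >= heights[A] then yields the threshold x described above.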
Now back to the original task, where we have two buildings, A and B, to be deconstructed. Let me remind you that A is located to the left of B, i.e. A < B.
Suppose that one of A’s floors is moved to the top of building C and one of B’s floors is moved to the top of building D. We claim that in the optimal solution building D is never located to the left of building C, i.e. it always holds that C <= D.
Otherwise we could save some coins by swapping one of A’s floors on building C with one of B’s floors on building D.
Using the above observation, we know that the optimal solution looks like one of the following two cases:
(1) For some i, buildings 0, 1, … i-1 don’t contain B’s floors (they may or may not contain A’s floors), and buildings i, i+1, … N-1 don’t contain A’s floors (they may or may not contain B’s floors).
(2) For some i, buildings 0, 1, … i-1 don’t contain B’s floors (they may or may not contain A’s floors), and buildings i+1, i+2, … N-1 don’t contain A’s floors (they may or may not contain B’s floors). Building i contains both A’s floors and B’s floors.
So, searching for the result, we can try each i, optimally solve both cases, and keep the best answer.
Please note that by now you should already know how to compute the answer for the first case, where the consecutive regions containing the floors of buildings A and B are disjoint (each region is equivalent to the problem with only one building to deconstruct).
Let’s call the building that contains both A’s floors and B’s floors the shared building. Fix the position of the shared building and its final height. Then we can determine the final heights of the other buildings.
Again, I will use an example to illustrate the solution: there are five buildings with heights {4, 3, 2, 4, 2}, cost = 3, A = 1 and B = 3. (Note that buildings are numbered from 0.)
If we fix shared = 2 and say that the height of it becomes 5, we can determine the height of the other buildings as follows:
First, let’s concentrate on building A. Without loss of generality we can say that the top floor of the shared building is one of A’s floors. We can do this because we know the shared building contains at least one of A’s floors, and without any change to the final cost we can simply place an A’s floor as the last one on the shared building.
The cost to move that floor to the top of the shared building is 3 + 4 = 7, so the cost to move one of A’s floors to the top of the leftmost building should also be at least 7. (Otherwise we could save some coins by moving a floor between the leftmost building and the shared building.)
So we see that we can distribute at most 4 floors from building A, altogether (1 floor to the leftmost building and at most 3 floors to shared building).
We can use the same argument for building B.
We say that our guess of the height of the shared building was correct if the following three conditions hold:
(i) The total number of floors we can distribute is at least the sum of the numbers of floors of buildings A and B.
(ii) The number of floors placed on top of buildings 0, 1, …, shared-1 is at most (A’s height).
(iii) The number of floors placed on top of buildings shared+1, shared+2, …, N-1 is at most (B’s height).
Again we are going to use binary search to calculate the height of the shared building. Among all structures that achieve the minimal cost, we must choose the one whose shared building’s height is as small as possible. It means that the costs of the topmost floors of the other buildings are (the cost of the topmost floor of the shared building) + (0 or 1).
ainu7 used ternary search on the number of A’s floors in the shared building. You can see his solution and, as an exercise, try to prove that it works.